# Japanese Dialogue Optimization

## Qwen2.5 Bakeneko 32B Instruct V2 GGUF

rinna · Apache-2.0 · Large Language Model · Japanese · 597 downloads · 5 likes

A quantized version of rinna/qwen2.5-bakeneko-32b-instruct-v2 produced with llama.cpp, compatible with various llama.cpp-based applications.
## Qwen2.5 Bakeneko 32B Instruct V2

rinna · Apache-2.0 · Large Language Model · Transformers · Japanese · 140 downloads · 6 likes

An instruction-tuned variant of Qwen2.5 Bakeneko 32B, enhanced with Chat Vector merging and ORPO for improved instruction following, with strong results on Japanese MT-Bench.
## QwQ Bakeneko 32B GGUF

rinna · Apache-2.0 · Large Language Model · Japanese · 1,370 downloads · 6 likes

A Japanese dialogue model based on rinna/qwq-bakeneko-32b and quantized with llama.cpp, compatible with most llama.cpp-based applications.
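The GGUF entries above distribute quantized weights in llama.cpp's GGUF container format. As an illustrative sketch (not code from any of the listed projects), the fixed-size GGUF header can be parsed as follows; the field layout (magic, version, tensor count, metadata key-value count) follows the GGUF v3 specification, and the synthetic header bytes here are fabricated for demonstration:

```python
import struct

GGUF_MAGIC = b"GGUF"

def read_gguf_header(buf: bytes) -> dict:
    """Parse the fixed GGUF header: 4-byte magic, uint32 version,
    uint64 tensor count, uint64 metadata key-value count (all little-endian)."""
    if buf[:4] != GGUF_MAGIC:
        raise ValueError("not a GGUF file")
    (version,) = struct.unpack_from("<I", buf, 4)
    tensor_count, kv_count = struct.unpack_from("<QQ", buf, 8)
    return {"version": version, "tensor_count": tensor_count,
            "metadata_kv_count": kv_count}

# Build a minimal synthetic header: version 3, 2 tensors, 5 metadata pairs.
header = GGUF_MAGIC + struct.pack("<I", 3) + struct.pack("<QQ", 2, 5)
print(read_gguf_header(header))
```

In a real GGUF file, the metadata key-value pairs (architecture, tokenizer, quantization type, etc.) and tensor descriptors follow immediately after this header.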
## QwQ Bakeneko 32B

rinna · Apache-2.0 · Large Language Model · Transformers · Japanese · 1,597 downloads · 17 likes

A Japanese dialogue model built by merging Qwen2.5-32B and QwQ-32B, enhanced with Chat Vector merging and ORPO for improved instruction following.
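Several of the rinna entries cite Chat Vector merging: the weight delta between an instruction-tuned model and its base is added to another model's weights. A minimal sketch of the idea, assuming toy per-tensor dictionaries of scalars (hypothetical names; real merges operate on full checkpoint tensors):

```python
def apply_chat_vector(target: dict, base: dict, instruct: dict, scale: float = 1.0) -> dict:
    """Add the 'chat vector' (instruct minus base weights) to a target model.
    All arguments are {tensor_name: value} mappings; real models use large tensors."""
    return {name: target[name] + scale * (instruct[name] - base[name]) for name in target}

# Toy scalars stand in for weight tensors.
base = {"w": 1.0}
instruct = {"w": 1.5}   # instruction tuning shifted this weight by +0.5
target = {"w": 2.0}     # a different base model to transplant the shift onto
print(apply_chat_vector(target, base, instruct))  # {'w': 2.5}
```

The `scale` parameter lets the transplanted delta be attenuated, which merging recipes sometimes use to trade off chat behavior against the target model's original capabilities.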
## Tanuki 8B DPO v1.0

weblab-GENIAC · Apache-2.0 · Large Language Model · Transformers · Supports Multiple Languages · 1,143 downloads · 41 likes

Tanuki-8B is an 8B-parameter Japanese large language model developed by the GENIAC Matsuo Lab, optimized for dialogue tasks through SFT and DPO.
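Tanuki-8B's listing mentions DPO (Direct Preference Optimization). As a hedged numeric sketch of the standard DPO objective for a single preference pair (the log-probabilities below are made-up toy values, not outputs of any listed model):

```python
import math

def dpo_loss(pol_chosen: float, pol_rejected: float,
             ref_chosen: float, ref_rejected: float, beta: float = 0.1) -> float:
    """DPO loss for one preference pair: -log(sigmoid(beta * margin)), where the
    margin compares how much more the policy prefers the chosen response over
    the rejected one, relative to the frozen reference model. Inputs are summed
    log-probabilities of each response under each model."""
    margin = beta * ((pol_chosen - ref_chosen) - (pol_rejected - ref_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Toy values: the policy has widened the chosen/rejected gap versus the
# reference, so the loss falls below log(2), its value at zero margin.
loss = dpo_loss(pol_chosen=-12.0, pol_rejected=-15.0,
                ref_chosen=-13.0, ref_rejected=-14.0)
print(round(loss, 4))
```

Training minimizes this loss over a dataset of (chosen, rejected) response pairs, pushing the policy toward preferred responses without a separate reward model.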
## Llama 3 8B Japanese Instruct

haqishen · Large Language Model · Transformers · Supports Multiple Languages · 33 downloads · 22 likes

A Meta-Llama-3-8B-Instruct model fine-tuned on Japanese dialogue datasets, specializing in Japanese conversational tasks.
## Suzume Llama 3 8B Japanese

lightblue · Other license · Large Language Model · Transformers · 2,011 downloads · 24 likes

A Japanese fine-tuned model based on Llama 3, optimized for Japanese dialogue.